Current Issue: October-December 2019 | Issue Number: 4 | Articles: 5
Automatic vision inspection technology shows high potential for quality inspection and has drawn great interest in micro-armature manufacturing. Because inspection performed by the human eye lacks standardization and efficiency, it is necessary to develop an automatic defect detection process. In this work, an elaborated vision system for the defect inspection of micro-armatures used in smartphones was developed. It consists of two parts, the front-end module and the deep convolutional neural network (DCNN) module, which are responsible for different areas. The front-end module runs first, and the DCNN module will not run if the output of the front-end module is negative. To verify the application of this system, an apparatus consisting of an objective table, a control panel, and a camera connected to a Personal Computer (PC) was used to simulate an industrial production setting. The results indicate that the developed vision system is capable of detecting defects in micro-armatures.
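The gated two-stage pipeline the abstract describes (the DCNN stage is skipped whenever the front-end verdict is negative) can be sketched as follows. The predicate and classifier names here are hypothetical stand-ins, not the authors' actual modules, and the assumption that a negative front-end output flags the part as defective is an illustrative reading of the abstract:

```python
def inspect(image, front_end, dcnn_classify):
    """Two-stage gated inspection sketch.

    front_end: cheap rule-based check; returns False (negative) to reject early.
    dcnn_classify: expensive DCNN stage; returns "defect" or "ok".
    """
    # Stage 1: fast front-end check runs first.
    if not front_end(image):
        # Negative front-end output: the DCNN stage is never invoked.
        return ("defect", "front-end")
    # Stage 2: DCNN classification, run only when stage 1 passes.
    return (dcnn_classify(image), "dcnn")


# Hypothetical stand-ins for the two modules, for demonstration only.
def demo_front_end(img):
    return img.get("aligned", False)

def demo_dcnn(img):
    return "defect" if img.get("scratch") else "ok"
```

Gating the expensive stage behind a cheap check is a common way to keep per-part inspection latency low on a production line.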
Defacement attacks have long been considered one of the prime threats to websites and web applications of companies, enterprises, and government organizations. Defacement attacks can bring serious consequences to the owners of websites, including immediate interruption of website operations and damage to the owner's reputation, which may result in huge financial losses. Many solutions have been researched and deployed for monitoring and detecting website defacement attacks, such as those based on checksum comparison, diff comparison, DOM tree analysis, and complicated algorithms. However, some solutions only work on static websites and others demand extensive computing resources. This paper proposes a hybrid defacement detection model based on the combination of machine learning-based detection and signature-based detection. The machine learning-based detection first constructs a detection profile using training data of both normal and defaced web pages. Then, it uses the profile to classify monitored web pages as either normal or attacked. The machine learning-based component can effectively detect defacements for both static pages and dynamic pages. On the other hand, the signature-based detection is used to boost the model's processing performance for common types of defacements. Extensive experiments show that our model produces an overall accuracy of more than 99.26% and a false positive rate of about 0.27%. Moreover, our model is suitable for implementing a real-time website defacement monitoring system because it does not demand extensive computing resources.
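The hybrid scheme in this abstract (a fast signature pass for common defacements, with the machine-learning profile as the general fallback) might be arranged like this. The signature patterns and the classifier interface below are illustrative assumptions, not the paper's actual rules or model:

```python
import re

# Illustrative defacement signatures; a real deployment would maintain a
# curated, regularly updated signature set.
SIGNATURES = [re.compile(p, re.IGNORECASE) for p in (r"hacked by", r"owned by")]

def detect(page_html, ml_classifier):
    """Hybrid detection sketch: signature fast path, then ML fallback.

    ml_classifier: callable returning True when the page matches the
    'defaced' side of a profile trained on normal and defaced pages.
    """
    # Fast path: known defacement signatures cover common attacks cheaply.
    if any(sig.search(page_html) for sig in SIGNATURES):
        return "defaced"
    # Fallback: machine-learning detection handles pages the signatures miss,
    # including dynamic pages whose content changes legitimately.
    return "defaced" if ml_classifier(page_html) else "normal"
```

Putting the cheap signature check first is what gives the model its real-time processing headroom: most common defacements never reach the costlier classifier.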
This paper describes a decade of development in the use of virtual reality for design work in one application area. Virtual reality technology has developed rapidly within that decade, from walk-in, CAVE-like virtual environments to head-mounted displays. In this paper, the development is studied through the lens of diffusion-of-innovation theory, which focuses not only on the innovation itself but also on the social system around it. The development of virtual reality technology is studied through one case, cabin design in the mobile work machine industry, a design process that has proven especially suitable for virtual reality technology.
Background: Avoidance of looking others in the eye is a characteristic symptom of Autism Spectrum Disorders (ASD), and it has been hypothesised that quantitative monitoring of gaze patterns could be useful for objectively evaluating treatments. However, tools to measure gaze behaviour on a regular basis at a manageable cost are missing. In this paper, we investigated whether a smartphone-based tool could address this problem. Specifically, we assessed the accuracy with which the phone-based, state-of-the-art eye-tracking algorithm iTracker can distinguish between gaze towards the eyes and the mouth of a face displayed on the smartphone screen. This might allow mobile, longitudinal monitoring of gaze-aversion behaviour in ASD patients in the future.

Results: We simulated a smartphone application in which subjects were shown an image on the screen and their gaze was analysed using iTracker. We evaluated the accuracy of our set-up across three tasks in a cohort of 17 healthy volunteers. In the first two tasks, subjects were shown different-sized images of a face and asked to alternate their gaze focus between the eyes and the mouth. In the last task, participants were asked to trace out a circle on the screen with their eyes. We confirm that iTracker can recapitulate the true gaze patterns and capture the relative position of gaze correctly, even on a different phone system from the one it was trained on. Subject-specific bias can be corrected using an error model informed by the calibration data. We compare two calibration methods and observe that a linear model performs better than a previously proposed support vector regression-based method.

Conclusions: Under controlled conditions it is possible to reliably distinguish between gaze towards the eyes and the mouth with a smartphone-based set-up. However, future research will be required to improve the robustness of the system to the roll angle of the phone and the distance between the user and the screen to allow deployment in a home setting. We conclude that a smartphone-based gaze-monitoring tool provides promising opportunities for more quantitative monitoring of ASD.
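The linear calibration model that the abstract reports outperforming support vector regression can be sketched as a per-axis affine fit from estimated to true gaze coordinates. The exact model the authors fit is not specified in the abstract, so the formulation below (independent slope and offset per axis, estimated by least squares on the calibration points) is an assumption:

```python
def fit_linear_correction(est, true):
    """Fit a per-axis affine map from estimated to true gaze points.

    est, true: equal-length lists of (x, y) tuples collected during a
    calibration phase. Returns a callable that corrects a raw estimate.
    """
    def fit_axis(u, v):
        # Ordinary least-squares slope/intercept for one axis.
        n = len(u)
        mu, mv = sum(u) / n, sum(v) / n
        cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
        var = sum((a - mu) ** 2 for a in u)
        slope = cov / var
        return slope, mv - slope * mu

    ax = fit_axis([p[0] for p in est], [p[0] for p in true])
    ay = fit_axis([p[1] for p in est], [p[1] for p in true])

    def correct(p):
        # Apply the fitted affine correction to a raw gaze estimate.
        return (ax[0] * p[0] + ax[1], ay[0] * p[1] + ay[1])

    return correct
```

A model this small needs only a handful of calibration points per subject, which fits the paper's goal of low-burden, repeated at-home measurements.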
Purpose: To explore imaging biomarkers that can be used for diagnosis and prediction of pathologic stage in non-small cell lung cancer (NSCLC) using multiple machine learning algorithms based on CT image feature analysis.

Methods: Patients with stage IA to IV NSCLC were included, and the whole dataset was divided into training and testing sets and an external validation set. To tackle imbalanced datasets in NSCLC, we generated a new dataset and achieved equilibrium of class distribution by using the SMOTE algorithm. The datasets were randomly split into a training/testing set. We calculated the importance value of CT image features by means of the mean decrease in Gini impurity generated by the random forest algorithm and selected optimal features according to feature importance (mean decrease in Gini impurity >0.005). The performance of the prediction model in the training and testing sets was evaluated from the perspectives of classification accuracy, average precision (AP) score, and precision-recall curve. The predictive accuracy of the model was externally validated using lung adenocarcinoma (LUAD) and lung squamous cell carcinoma (LUSC) samples from the TCGA database.

Results: The prediction model that incorporated nine image features exhibited high classification accuracy, precision, and recall scores in the training and testing sets. In the external validation, the predictive accuracy of the model in LUAD outperformed that in LUSC.

Conclusions: The pathologic stage of patients with NSCLC can be accurately predicted based on CT image features, especially for LUAD. Our findings extend the application of machine learning algorithms in CT image feature prediction for pathologic staging and identify potential imaging biomarkers that can be used for diagnosis of pathologic stage in NSCLC patients.
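The class-balancing step in the Methods uses SMOTE. A minimal pure-Python sketch of its core idea (each synthetic minority sample is an interpolation between a real minority sample and one of its k nearest minority neighbours) is shown below; a real pipeline would instead use an established implementation such as imbalanced-learn's SMOTE, and the parameter names here are illustrative:

```python
import random

def smote_oversample(minority, n_new, k=3, seed=0):
    """SMOTE-style oversampling sketch.

    minority: list of feature tuples from the minority class.
    n_new: number of synthetic samples to generate.
    k: neighbourhood size used when picking an interpolation partner.
    """
    rng = random.Random(seed)

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        # k nearest minority neighbours of the chosen base sample.
        neighbours = sorted((s for s in minority if s is not base),
                            key=lambda s: dist2(s, base))[:k]
        nb = rng.choice(neighbours)
        # Interpolate at a random point on the segment between base and nb.
        t = rng.random()
        synthetic.append(tuple(b + t * (n - b) for b, n in zip(base, nb)))
    return synthetic
```

Because each synthetic point lies on a segment between two real minority samples, the oversampled class stays inside the region the minority data already occupies rather than duplicating points outright.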